33 research outputs found

    The non-locality of n noisy Popescu-Rohrlich boxes

    We quantify the amount of non-locality contained in n noisy versions of so-called Popescu-Rohrlich boxes (PRBs), i.e., bipartite systems violating the CHSH Bell inequality maximally. Following the approach by Elitzur, Popescu, and Rohrlich, we measure the amount of non-locality of a system by representing it as a convex combination of a local behaviour, with maximal possible weight, and a non-signalling system. We show that the local part of n systems, each of which approximates a PRB with probability 1-ε, is of order Θ(ε^⌈n/2⌉) in the isotropic case, and equal to (3ε)^n in the maximally biased case.
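
    The two scalings stated above can be illustrated numerically. The Python sketch below only evaluates the formulas quoted in the abstract; the helper names and the constant c in the isotropic bound are ours (the Θ-bound fixes the order, not the constant), not values from the paper.

        import math

        def local_part_biased(eps: float, n: int) -> float:
            # Maximally biased case: local part equals (3*eps)^n, per the abstract.
            return (3 * eps) ** n

        def local_part_isotropic_order(eps: float, n: int, c: float = 1.0) -> float:
            # Isotropic case: local part is Theta(eps^ceil(n/2)); c is an
            # illustrative constant only.
            return c * eps ** math.ceil(n / 2)

        for n in (1, 2, 4):
            print(n, local_part_biased(0.01, n), local_part_isotropic_order(0.01, n))

    For small ε the biased local part decays like (3ε)^n, noticeably faster than the ε^⌈n/2⌉ behaviour of the isotropic case.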

    Minimal Complete Primitives for Secure Multi-Party Computation

    The study of the minimal cryptographic primitives needed to implement secure computation among two or more players is a fundamental question in cryptography. The issue of complete primitives for the case of two players has been thoroughly studied. However, in the multi-party setting, where there are n > 2 players and t of them are corrupted, the question of what the simplest complete primitives are remained open for t ≥ n/3. (A primitive is called complete if any computation can be carried out by the players having access only to the primitive and local computation.) In this paper we consider this question and introduce complete primitives of minimal cardinality for secure multi-party computation. The cardinality issue (the number of players accessing the primitive) is essential in settings where primitives are implemented by some other means, and the simpler the primitive the easier it is to realize. We show that our primitives are complete and of the minimal cardinality possible for most cases.

    Tight Bounds for Protocols with Hybrid Security

    We consider broadcast and multi-party computation (MPC) in the setting where a digital signature scheme and a respective public-key infrastructure (PKI) are given among the players. However, neither the signature scheme nor the PKI is fully trusted. The goal is to achieve unconditional (PKI- and signature-independent) security up to a certain threshold, and security beyond this threshold under stronger assumptions, namely, that the forgery of signatures is impossible and/or that the given PKI is not under adversarial control. We give protocols for broadcast and MPC that achieve an optimal trade-off between these different levels of security.

    FairTraDEX: A Decentralised Exchange Preventing Value Extraction

    We present FairTraDEX, a decentralized exchange (DEX) protocol based on frequent batch auctions (FBAs), which provides formal game-theoretic guarantees against extractable value. FBAs, when run by a trusted third party, provide unique game-theoretic optimal strategies which ensure players are shown prices equal to the liquidity provider's fair price, excluding explicit, pre-determined fees. FairTraDEX replicates the key features of an FBA that provide these game-theoretic guarantees using a combination of set-membership in zero-knowledge protocols and an escrow-enforced commit-reveal protocol. We extend the results of FBAs to handle monopolistic and/or malicious liquidity providers. We provide real-world examples demonstrating that the costs of executing orders in existing academic and industry-standard protocols become prohibitive as order size increases due to basic value extraction techniques, popularized as maximal extractable value. We further demonstrate that FairTraDEX protects against these execution costs, guaranteeing a fixed-fee model independent of order size, the first guarantee of its kind for a DEX protocol. We also provide detailed Solidity and pseudo-code implementations of FairTraDEX, making FairTraDEX a novel and practical contribution.
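
    The core mechanism described above is a batch of hash-committed orders that are later revealed and settled at a single price. The following Python sketch shows only that commit-reveal batching idea under hypothetical names (commit, BatchAuction, lp_price); it omits the zero-knowledge set-membership, escrow enforcement, and Solidity details that the actual FairTraDEX construction relies on.

        import hashlib
        import secrets

        def commit(side: str, size: int, salt: bytes) -> str:
            # Hash commitment to an order; side and size stay hidden until reveal.
            return hashlib.sha256(side.encode() + size.to_bytes(8, "big") + salt).hexdigest()

        class BatchAuction:
            """Naive frequent-batch sketch: commit, then reveal, then clear the
            whole batch at one price, so intra-batch ordering reveals nothing."""
            def __init__(self):
                self.commitments = set()
                self.revealed = []

            def submit(self, c: str):
                self.commitments.add(c)

            def reveal(self, side: str, size: int, salt: bytes):
                if commit(side, size, salt) in self.commitments:
                    self.revealed.append((side, size))

            def clear(self, lp_price: float):
                # Every revealed order executes at the same quoted price.
                return [(side, size, lp_price) for side, size in self.revealed]

        salt = secrets.token_bytes(16)
        auction = BatchAuction()
        auction.submit(commit("buy", 10, salt))
        auction.reveal("buy", 10, salt)
        print(auction.clear(lp_price=100.0))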

    On the Number of Synchronous Rounds Required for Byzantine Agreement

    Byzantine agreement is typically considered with respect to either a fully synchronous network or a fully asynchronous one. In the synchronous case, achieving Byzantine agreement requires either t+1 deterministic rounds or at least some large constant expected number of rounds. In this paper we examine the question of how many initial synchronous rounds are required for Byzantine agreement if we are allowed to switch to asynchronous operation afterwards. Let n = h + t be the number of parties, where h are honest and t are corrupted. As the main result we show that, in the model with a public-key infrastructure and signatures, d + O(1) deterministic synchronous rounds are sufficient, where d is the minimal integer such that n - d > 3(t - d). This improves over the t+1 necessary deterministic rounds for almost all cases, and over the exact expected number of rounds in the non-deterministic case for many cases.
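
    The parameter d above is the smallest integer satisfying n - d > 3(t - d) and can be computed directly; the helper below is a minimal Python sketch (the function name is ours, not the paper's).

        def min_d(n: int, t: int) -> int:
            # Smallest d >= 0 with n - d > 3*(t - d); the protocol then runs in
            # d + O(1) deterministic synchronous rounds.
            d = 0
            while not (n - d > 3 * (t - d)):
                d += 1
            return d

        # Example: n = 10 parties with t = 4 corrupted gives d = 2, since 10 - 2 > 3*(4 - 2).
        print(min_d(10, 4))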

    A Quantum solution to the Byzantine agreement problem

    We present a solution to an old and timely problem in distributed computing. As in Quantum Key Distribution (QKD), quantum channels make it possible to achieve tasks that are classically impossible. However, unlike QKD, here the goal is not secrecy but agreement, the adversary is not outside but inside the game, and the required resources are qutrits.

    Byzantine Agreement Given Partial Broadcast

    This paper considers unconditionally secure protocols for reliable broadcast among a set of n players, where up to t of the players can be corrupted by a (Byzantine) adversary and the remaining h = n - t players remain honest. In the standard model with a complete, synchronous network of bilateral authenticated communication channels among the players, broadcast is achievable if and only if 2n/h < 3. We show that, by extending this model with partial broadcast channels among subsets of b players, global broadcast can be achieved if and only if the number h of honest players satisfies 2n/h < b + 1. Achievability is demonstrated by protocols with communication and computation complexities polynomial in the size of the network, i.e., in the number of partial broadcast channels. A respective characterization for the related consensus problem is also given.
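
    Both feasibility conditions quoted above are simple arithmetic checks; the Python sketch below just evaluates them (the function names are ours).

        def broadcast_feasible_bilateral(n: int, h: int) -> bool:
            # Standard model (bilateral channels only): broadcast iff 2n/h < 3.
            return 2 * n < 3 * h

        def broadcast_feasible_partial(n: int, h: int, b: int) -> bool:
            # With partial broadcast among b-player subsets: broadcast iff 2n/h < b + 1.
            return 2 * n < (b + 1) * h

        # Example: n = 9, h = 5 fails with bilateral channels (2n/h = 3.6 >= 3),
        # but succeeds given partial broadcast among 4 players (3.6 < 5).
        print(broadcast_feasible_bilateral(9, 5), broadcast_feasible_partial(9, 5, 4))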

    Extended Validity and Consistency in Byzantine Agreement

    A broadcast protocol allows a sender to distribute a value among a set of players such that it is guaranteed that all players receive the same value (consistency), and if the sender is honest, then all players receive the sender's value (validity). Classical broadcast protocols for n players provide security with respect to a fixed threshold t < n/3, where both consistency and validity are guaranteed as long as at most t players are corrupted, and no security at all is guaranteed as soon as t+1 players are corrupted. Depending on the environment, validity or consistency may be the more important property. We generalize the notion of broadcast by introducing an additional threshold t^+ ≥ t. In a broadcast protocol with extended validity, both consistency and validity are achieved when no more than t players are corrupted, and validity is achieved even when up to t^+ players are corrupted. Similarly, we define broadcast with extended consistency. We prove that broadcast with extended validity as well as broadcast with extended consistency is achievable if and only if t + 2t^+ < n (or t = 0). For example, six players can achieve broadcast when at most one player is corrupted (this result was known to be optimal), but they can even achieve consistency (or validity) when two players are corrupted. Furthermore, our protocols achieve detection in case of failure, i.e., if at most t players are corrupted then broadcast is achieved, and if at most t^+ players are corrupted then broadcast is achieved or every player learns that the protocol failed. This protocol can be employed in the precomputation of a secure multi-party computation protocol, resulting in detectable multi-party computation, where up to t corruptions can be tolerated and up to t^+ corruptions can either be tolerated or detected in the precomputation, for any t, t^+ with t + 2t^+ < n.
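
    The achievability condition and the six-player example above can be checked in one line of Python; the helper name below is ours.

        def extended_broadcast_achievable(n: int, t: int, t_plus: int) -> bool:
            # Broadcast with extended validity (or consistency) exists iff
            # t + 2*t_plus < n, or trivially when t == 0.
            return t == 0 or t + 2 * t_plus < n

        # Six players: broadcast for t = 1 with extended guarantees up to t_plus = 2.
        print(extended_broadcast_achievable(6, 1, 2))  # True, since 1 + 4 < 6
        print(extended_broadcast_achievable(6, 1, 3))  # False, since 1 + 6 >= 6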

    Proof-of-Stake Blockchain Protocols with Near-Optimal Throughput

    One of the most significant challenges in the design of blockchain protocols is increasing their transaction-processing throughput. In this work we put forth, for the first time, a formal execution model that makes it possible to express transaction throughput while supporting formal security arguments regarding persistence and liveness. We then present a protocol in the proof-of-stake setting achieving near-optimal throughput under adaptive active corruption of any minority of the stake.

    Ofelimos: Combinatorial Optimization via Proof-of-Useful-Work - A Provably Secure Blockchain Protocol

    Minimizing the energy cost and carbon footprint of the Bitcoin blockchain and related protocols is one of the most widely identified open questions in the cryptocurrency space. Substituting the proof-of-work (PoW) primitive in Nakamoto's longest-chain protocol with a proof of useful work (PoUW) has long been theorized as an ideal solution in many respects but, to this day, the concept still lacks a convincingly secure realization. In this work we put forth Ofelimos, a novel PoUW-based blockchain protocol whose consensus mechanism simultaneously realizes a decentralized optimization-problem solver. Our protocol is built around a novel local search algorithm, which we call Doubly Parallel Local Search (DPLS), that is especially crafted to suit implementation as the PoUW component of our blockchain protocol. We provide a thorough security analysis of our protocol and additionally present metrics that reflect the usefulness of the system. As an illustrative example, we show how DPLS can implement a variant of WalkSAT and experimentally demonstrate its competitiveness with respect to a vanilla WalkSAT implementation. In this way, our work paves the way for safely using blockchain systems as generic optimization engines for a variety of hard optimization problems for which a publicly verifiable solution is desired.
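
    For context on the WalkSAT example mentioned above, a generic WalkSAT loop looks roughly as follows. This is a textbook-style Python sketch under our own naming, not the paper's DPLS-based variant or its PoUW integration.

        import random

        def walksat(clauses, n_vars, max_flips=10_000, p=0.5, seed=0):
            """Basic WalkSAT: clauses are lists of non-zero ints (DIMACS-style
            literals); returns a satisfying assignment dict or None."""
            rng = random.Random(seed)
            assign = {v: rng.choice([True, False]) for v in range(1, n_vars + 1)}

            def satisfied(clause):
                return any(assign[abs(lit)] == (lit > 0) for lit in clause)

            def break_count(var):
                # Number of clauses that are satisfied now but would become
                # unsatisfied if `var` were flipped.
                before = [satisfied(c) for c in clauses]
                assign[var] = not assign[var]
                after = [satisfied(c) for c in clauses]
                assign[var] = not assign[var]
                return sum(1 for b, a in zip(before, after) if b and not a)

            for _ in range(max_flips):
                unsat = [c for c in clauses if not satisfied(c)]
                if not unsat:
                    return assign
                clause = rng.choice(unsat)
                if rng.random() < p:
                    var = abs(rng.choice(clause))                             # random-walk step
                else:
                    var = min((abs(lit) for lit in clause), key=break_count)  # greedy step
                assign[var] = not assign[var]
            return None

        # Tiny example: (x1 or x2) and (not x1 or x2) is satisfied by setting x2 = True.
        print(walksat([[1, 2], [-1, 2]], n_vars=2))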